Add a "severity" for tests (#1005) #1410

Merged
merged 1 commit into from
Apr 30, 2019

Conversation

beckjake (Contributor) commented Apr 23, 2019

Fixes #1005

Add an optional severity config for tests that defaults to ERROR.

A test with severity=WARN that fails looks like this:

Running with dbt=0.13.0
Found 1 models, 2 tests, 0 archives, 0 analyses, 95 macros, 0 operations, 1 seed files, 1 sources

18:28:20 | Concurrency: 4 threads (target='default2')
18:28:20 |
18:28:20 | 1 of 2 START test not_null_model_email............................... [RUN]
18:28:20 | 2 of 2 START test source_not_null_source_nulls_email................. [RUN]
18:28:20 | 1 of 2 WARN 2 not_null_model_email................................... [WARN 2 in 0.04s]
18:28:20 | 2 of 2 WARN 2 source_not_null_source_nulls_email..................... [WARN 2 in 0.04s]
18:28:20 |
18:28:20 | Finished running 2 tests in 0.57s.

Those trailing WARN 2s are yellow, like in source freshness.

A test with severity=WARN that fails with the strict flag set looks like this:

Running with dbt=0.13.0
Found 1 models, 2 tests, 0 archives, 0 analyses, 95 macros, 0 operations, 1 seed files, 1 sources

18:28:24 | Concurrency: 4 threads (target='default2')
18:28:24 |
18:28:24 | 1 of 2 START test not_null_model_email............................... [RUN]
18:28:24 | 2 of 2 START test source_not_null_source_nulls_email................. [RUN]
18:28:24 | 2 of 2 FAIL 2 source_not_null_source_nulls_email..................... [FAIL 2 in 0.04s]
18:28:24 | 1 of 2 FAIL 2 not_null_model_email................................... [FAIL 2 in 0.04s]
18:28:24 |
18:28:24 | Finished running 2 tests in 0.62s.

Those trailing FAIL 2s are red.

I also refactored some of the schema test parsing code and made a TestBuilder class to encapsulate the somewhat hairy logic currently involved in the source/ref test split. I think that came out really nicely.
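
For illustration, here is a minimal sketch of the idea behind such a builder: one place that resolves a raw schema-test entry into a name, kwargs, and severity. The names and structure are hypothetical, not the actual dbt implementation:

    class TestBuilder:
        """Resolve a raw schema-test entry into a name, kwargs, and severity."""

        def __init__(self, test_entry):
            if isinstance(test_entry, str):
                # e.g. "unique": a bare test name with no arguments
                self.name, self.kwargs = test_entry, {}
            elif isinstance(test_entry, dict):
                # e.g. {"relationships": {"from": "id", "to": "...", "field": "name"}}
                # (unpacking assumes a single-key dict, as in the schema.yml spec)
                (self.name, self.kwargs), = test_entry.items()
            else:
                raise TypeError("invalid test entry: {!r}".format(test_entry))
            # severity is an optional config that defaults to ERROR
            self.severity = str(self.kwargs.pop("severity", "ERROR")).upper()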

I think this is mostly done; I just want to trigger the Windows tests to make sure everything went OK there.

beckjake force-pushed the feature/test-severity branch 4 times, most recently from f8633e9 to 88edf14, on April 23, 2019 20:11
beckjake marked this pull request as ready for review on April 23, 2019 21:03

drewbanin (Contributor) left a comment

This is really cool. I like the work you did around TestBuilder, and the code works super well.

In playing around with this, I think I have some reservations about parsing the severity out of the test name. While it's really slick for string-type tests, I don't think it holds up for dict-type tests, e.g.:

version: 2

models:
  - name: my_model
    columns:
      - name: id
        tests:
          - unique, severity=ERROR
          - relationships, severity=WARN:
              from: id
              to: ref('other_model')
              field: name

The unique example is great and intuitive, but the relationships example feels pretty strange to me. Having played around with this for a few minutes, I think I'd prefer supplying the severity config as an argument to the test, e.g.:

models:
  - name: my_model
    columns:
      - name: id
        tests:
          - unique:
              severity: ERROR
          - relationships:
              from: id
              to: ref('other_model')
              field: name
              severity: WARN

This will definitely mask any macro args named severity, which is unpleasant, but I think we could work around it by supporting args like __severity__, or dbt.severity, or similar if need be.

The - unique, severity=ERROR case is super nice, and a lot more pleasant than adding an arg to the unique test config. I don't think it would make sense to support both syntaxes, but I just want to make it clear that I need to sleep on this one.

Let's chat about what our options are here.

drewbanin commented on this part of the modifier-parsing code:

    return modifiers

    for entry in modifiers_value.split(','):
        if '=' not in entry:
I don't think this works as expected: the following spec compiles and runs, whereas I would expect to see a compilation error:

models:
    - name: my_model
      columns:
          - name: id
            tests:
                - unique, invalid
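
A minimal sketch of the kind of validation that would catch this, assuming the parser rejects modifier entries that are not key=value pairs (hypothetical code, not the actual dbt parser):

    def parse_modifiers(modifiers_value):
        # Parse a "severity=WARN, other=x" style modifier string into a
        # dict, raising on entries that are not key=value pairs instead
        # of silently ignoring them.
        modifiers = {}
        for entry in modifiers_value.split(','):
            entry = entry.strip()
            if not entry:
                continue
            if '=' not in entry:
                raise ValueError(
                    'invalid test modifier: {!r} (expected key=value)'.format(entry)
                )
            key, _, value = entry.partition('=')
            modifiers[key.strip()] = value.strip()
        return modifiers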

beckjake force-pushed the feature/test-severity branch 3 times, most recently from 8ca0d15 to 0ce2192, on April 26, 2019 03:49

drewbanin (Contributor) left a comment

One comment here, but otherwise I'm super happy with where we ended up! I tested this a bunch and it's rock-solid. I think a lot of folks are going to be excited to deploy this in their projects!

(A review thread on core/dbt/node_runners.py was marked outdated and resolved.)

The merged commit, "A small refactor to make test parsing easier to modify", covers:

- add the concept of test modifier kwargs and pass them through to config
- plug the severity setting into test result handling
- update existing tests
- add integration tests
- severity settings for data tests, too
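
To illustrate the behavior described in the pull request, here is a hypothetical sketch of how a failing test's status could be resolved from its severity and the strict flag (illustrative only, not the actual dbt code):

    def result_status(failures, severity, strict):
        # A failing severity=WARN test only warns, unless dbt runs with
        # the strict flag, which promotes the warning to a failure.
        if failures == 0:
            return 'PASS'
        if severity.upper() == 'WARN' and not strict:
            return 'WARN'
        return 'FAIL'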

beckjake merged commit 154aae5 into dev/wilt-chamberlain on Apr 30, 2019
beckjake deleted the feature/test-severity branch on April 30, 2019 16:15

Merging this pull request closes #1005: "Add severity granularity options for tests".